In consequential decision-making applications, mitigating unwanted biases in machine learning models that systematically disadvantage members of groups delineated by sensitive attributes such as race and gender is a key intervention for equity. Focusing on demographic parity and equality of opportunity, in this paper we propose an algorithm that improves the fairness of a pre-trained classifier by simply dropping carefully selected training data points. We select instances based on their influence on the fairness metric of interest, computed using an infinitesimal-jackknife-based approach. The training points are dropped only in principle: in practice, the model never needs to be refit. Crucially, we find that such an intervention does not substantially reduce the predictive performance of the model but drastically improves the fairness metric. Through careful experiments, we evaluate the effectiveness of the proposed approach on diverse tasks and find that it consistently improves upon existing alternatives.
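The mechanics are easiest to see in a small worked example. The sketch below, assuming a logistic-regression classifier and a smoothed demographic-parity gap, estimates each training point's influence on the fairness metric with the standard influence-function (infinitesimal jackknife) approximation and drops the points whose estimated removal moves the gap toward zero. All names and data are illustrative, not the paper's implementation; the final refit is included only to verify the effect, whereas the paper's point is that the influence estimates predict it without refitting.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logreg(X, y, l2=1e-3, iters=2000, lr=0.5):
    """Plain gradient-descent logistic regression (bias folded into X)."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = sigmoid(X @ w)
        w -= lr * (X.T @ (p - y) / len(y) + l2 * w)
    return w

def parity_gap(X, w, g):
    """Signed, smoothed demographic-parity gap: E[score | g=1] - E[score | g=0]."""
    s = sigmoid(X @ w)
    return s[g == 1].mean() - s[g == 0].mean()

def removal_influence(X, y, g, w, l2=1e-3):
    """Approximate change in the parity gap when example i is removed:
        delta_i ~= (1/n) * grad_F(w)^T H^{-1} grad_loss_i(w)."""
    n, d = X.shape
    p = sigmoid(X @ w)
    H = (X * (p * (1 - p))[:, None]).T @ X / n + l2 * np.eye(d)  # loss Hessian
    sg = (p * (1 - p))[:, None] * X                # d(score)/d(w), per example
    g_F = sg[g == 1].mean(0) - sg[g == 0].mean(0)  # gradient of the gap
    g_i = (p - y)[:, None] * X                     # per-example loss gradients
    return (g_i @ np.linalg.solve(H, g_F)) / n

# Toy data with a group-correlated feature and group-dependent label bias.
rng = np.random.default_rng(0)
n = 4000
g = (rng.random(n) < 0.5).astype(int)
X = np.c_[rng.normal(size=n), rng.normal(size=n) + 1.2 * g,
          rng.normal(size=n), np.ones(n)]          # last column = bias
y = (sigmoid(X[:, 0] + 1.5 * g) > rng.random(n)).astype(int)

w = fit_logreg(X, y)
gap = parity_gap(X, w, g)
delta = removal_influence(X, y, g, w)
# Drop the 400 points whose estimated removal most reduces |gap|.
keep = np.argsort(np.sign(gap) * delta)[400:]
w2 = fit_logreg(X[keep], y[keep])
print(f"gap before: {gap:+.3f}  after dropping 400 points: {parity_gap(X, w2, g):+.3f}")
```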
With the prospect of automating many chemical tasks with high fidelity, chemical language processing models are emerging rapidly. Here, we present a cloud-based real-time platform that allows users to virtually screen molecules of interest. To that end, molecular embeddings inferred from a recently proposed large chemical language model, named MoLFormer, are leveraged. The platform currently supports three tasks: nearest-neighbor retrieval, chemical space visualization, and property prediction. Based on the capabilities of the platform and the results it obtains, we believe that such a platform can play a pivotal role in automating chemistry and chemical engineering research, as well as assisting in drug discovery and material design tasks. A demo of our platform is available at www.ibm.biz/molecular_demo.
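Of the three supported tasks, nearest-neighbor retrieval is the simplest to sketch: embed a query molecule and rank the library by cosine similarity over the embeddings. The code below shows this pattern with random placeholder vectors standing in for MoLFormer embeddings; it is an illustrative sketch, not the platform's actual backend or API.

```python
import numpy as np

def nearest_neighbors(query_vec, library_vecs, names, k=5):
    """Return the k library molecules closest to the query by cosine similarity."""
    q = query_vec / np.linalg.norm(query_vec)
    L = library_vecs / np.linalg.norm(library_vecs, axis=1, keepdims=True)
    sims = L @ q
    top = np.argsort(-sims)[:k]
    return [(names[i], float(sims[i])) for i in top]

# Placeholder library: random 768-d vectors standing in for real embeddings.
rng = np.random.default_rng(0)
library = rng.normal(size=(10_000, 768))
names = [f"mol_{i}" for i in range(len(library))]
query = library[0] + 0.01 * rng.normal(size=768)  # a slightly perturbed molecule
print(nearest_neighbors(query, library, names))   # mol_0 should rank first
```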
With the growing availability of data across a variety of scientific domains, generative models hold enormous potential to accelerate scientific discovery at every step of the scientific method. Perhaps their most valuable application lies in what has traditionally been the slowest and most challenging step: hypothesis generation. Powerful representations are now being learned from large volumes of data to generate novel hypotheses, which is making a significant impact on scientific discovery applications ranging from material design to drug discovery. GT4SD (https://github.com/gt4sd/gt4sd-core) is an extensible open-source library that enables scientists, developers, and researchers to train and use state-of-the-art generative models for hypothesis generation in scientific discovery. GT4SD supports a variety of generative models across material science and drug discovery, including molecule discovery and design based on properties such as target proteins, omic profiles, scaffold distances, binding energies, and more.
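As an illustration of how the library is meant to be used, the snippet below follows the usage pattern from the GT4SD README: instantiate an algorithm configuration, bind it to a conditioning target (here a protein sequence), and sample candidate molecules. The module path and class names are quoted from memory of the documentation and may differ across library versions, so treat this as an assumption-laden sketch rather than a verified API reference.

```python
# Sketch of GT4SD usage, following the pattern in the project README; the
# exact module path and class names may vary between library versions.
from gt4sd.algorithms.conditional_generation.paccmann_rl.core import (
    PaccMannRL,
    PaccMannRLProteinBasedGenerator,
)

# Target protein sequence that conditions the generation (truncated example).
target = "MVLSPADKTNVKAAWGKVGAHAGEYGAEALERMFLSFPTTKTYFPHF"

configuration = PaccMannRLProteinBasedGenerator()
algorithm = PaccMannRL(configuration=configuration, target=target)

# Sample a handful of candidate SMILES strings conditioned on the target.
molecules = list(algorithm.sample(10))
print(molecules)
```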
Models based on machine learning can enable accurate and fast molecular property predictions, which is of interest in drug discovery and material design. Various supervised machine learning models have demonstrated promising performance, but the vast chemical space and the limited availability of property labels make supervised learning challenging. Recently, unsupervised transformer-based language models pretrained on a large unlabelled corpus have produced state-of-the-art results in many downstream natural language processing tasks. Inspired by this development, we present molecular embeddings obtained by training an efficient transformer encoder model, MoLFormer, which uses rotary positional embeddings. This model employs a linear attention mechanism, coupled with highly distributed training, on SMILES sequences of 1.1 billion unlabelled molecules from the PubChem and ZINC datasets. We show that the learned molecular representation outperforms existing baselines, including supervised and self-supervised graph neural networks and language models, on several downstream tasks from ten benchmark datasets, and performs competitively on the remaining two. Further analyses, specifically through the lens of attention, demonstrate that MoLFormer trained on chemical SMILES indeed learns the spatial relationships between atoms within a molecule. These results provide encouraging evidence that large-scale molecular language models can capture sufficient chemical and structural information to predict various distinct molecular properties, including quantum-chemical properties.
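Since rotary positional embeddings are the model's distinctive architectural choice, a compact sketch may help make them concrete. The snippet below implements the standard rotary scheme, rotating each pair of feature dimensions by a position-dependent angle, in plain numpy; the dimensions and tensors are toys, and this is the generic formulation rather than MoLFormer's exact code.

```python
import numpy as np

def rotary_embed(x, base=10000.0):
    """Rotate each (even, odd) feature pair of x by a position-dependent angle.
    x: (seq_len, dim) with dim even."""
    seq_len, dim = x.shape
    inv_freq = base ** (-np.arange(0, dim, 2) / dim)  # (dim/2,) frequencies
    angles = np.outer(np.arange(seq_len), inv_freq)   # (seq_len, dim/2)
    cos, sin = np.cos(angles), np.sin(angles)
    x_even, x_odd = x[:, 0::2], x[:, 1::2]
    out = np.empty_like(x)
    out[:, 0::2] = x_even * cos - x_odd * sin
    out[:, 1::2] = x_even * sin + x_odd * cos
    return out

# The dot product of rotated queries and keys depends only on their relative
# offset -- the property that lets attention encode relative positions.
q = np.ones((8, 16)); k = np.ones((8, 16))
rq, rk = rotary_embed(q), rotary_embed(k)
print(rq[2] @ rk[5], rq[3] @ rk[6])  # equal: both pairs are offset by 3
```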
Multiple studies have focused on predicting the prospective popularity of an online document as a whole, without paying attention to the contributions of its individual parts. We introduce the task of proactively forecasting popularities of sentences within online news documents solely utilizing their natural language content. We model sentence-specific popularity forecasting as a sequence regression task. For training our models, we curate InfoPop, the first dataset containing popularity labels for over 1.7 million sentences from over 50,000 online news documents. To the best of our knowledge, this is the first dataset automatically created using streams of incoming search engine queries to generate sentence-level popularity annotations. We propose a novel transfer learning approach involving sentence salience prediction as an auxiliary task. Our proposed technique coupled with a BERT-based neural model exceeds nDCG values of 0.8 for proactive sentence-specific popularity forecasting. Notably, our study presents a non-trivial takeaway: though popularity and salience are different concepts, transfer learning from salience prediction enhances popularity forecasting. We release InfoPop and make our code publicly available: https://github.com/sayarghoshroy/InfoPopularity
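To make the modeling setup concrete, here is a minimal sketch of a BERT-based sentence-level regressor of the kind the paper describes, using Hugging Face transformers; the encoder checkpoint, regression head, and toy labels are illustrative assumptions, not the released code. Under the paper's transfer-learning scheme, the same architecture would first be trained on the auxiliary salience-prediction task and then fine-tuned on popularity labels.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class SentencePopularityRegressor(nn.Module):
    def __init__(self, encoder_name="bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        self.head = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        # Regress a popularity score from each sentence's [CLS] representation.
        return self.head(out.last_hidden_state[:, 0]).squeeze(-1)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = SentencePopularityRegressor()
batch = tokenizer(["Officials confirmed the new policy on Monday.",
                   "The report also includes routine appendices."],
                  padding=True, return_tensors="pt")
scores = model(batch["input_ids"], batch["attention_mask"])
loss = nn.MSELoss()(scores, torch.tensor([0.9, 0.1]))  # toy popularity labels
loss.backward()  # an optimizer step would follow in a real training loop
```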